random permutation


Spanning Tree Autoregressive Visual Generation

Lee, Sangkyu, Lee, Changho, Han, Janghoon, Song, Hosung, You, Tackgeun, Lim, Hwasup, Choi, Stanley Jungkyu, Lee, Honglak, Yu, Youngjae

arXiv.org Artificial Intelligence

We present Spanning Tree Autoregressive (STAR) modeling, which incorporates prior knowledge about images, such as center bias and locality, to maintain sampling performance while still providing sequence orders flexible enough to accommodate image editing at inference. Approaches that expose randomly permuted sequence orders to conventional autoregressive (AR) models in visual generation to obtain bidirectional context either suffer a decline in performance or compromise the flexibility of sequence-order choice at inference. Instead, STAR uses traversal orders of uniform spanning trees sampled on the lattice defined by the positions of image patches. Since traversal orders are obtained through breadth-first search, we can efficiently construct, via rejection sampling, a spanning tree whose traversal order guarantees that a connected partial observation of the image appears as a prefix of the sequence. Through this structured randomization, tailored in contrast to random permutation, STAR preserves the capability of postfix completion while maintaining sampling performance without any significant changes to the model architecture widely adopted in language AR modeling.
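The abstract's core mechanism can be sketched concretely: sample a uniform spanning tree on the patch lattice and read off a breadth-first traversal order. The sketch below uses Wilson's loop-erased random-walk algorithm, a standard way to sample uniform spanning trees; the paper does not specify its sampler, and the function names (`wilson_ust`, `bfs_order`) are illustrative, not from the paper.

```python
import random
from collections import deque

def neighbors(r, c, h, w):
    """4-connected lattice neighbors of cell (r, c) in an h x w grid."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < h and 0 <= nc < w:
            yield nr, nc

def wilson_ust(h, w, seed=0):
    """Uniform spanning tree of an h x w grid via Wilson's algorithm:
    repeated loop-erased random walks rooted at (0, 0)."""
    rng = random.Random(seed)
    root = (0, 0)
    in_tree = {root}
    parent = {}
    for start in [(r, c) for r in range(h) for c in range(w)]:
        if start in in_tree:
            continue
        # Random walk until hitting the tree; overwriting nxt[v] on revisits
        # erases loops implicitly.
        nxt, v = {}, start
        while v not in in_tree:
            nxt[v] = rng.choice(list(neighbors(*v, h, w)))
            v = nxt[v]
        # Commit the loop-erased path to the tree.
        v = start
        while v not in in_tree:
            parent[v] = nxt[v]
            in_tree.add(v)
            v = nxt[v]
    return parent  # child -> parent edges

def bfs_order(parent, root=(0, 0)):
    """Breadth-first traversal order of the spanning tree from root."""
    children = {}
    for child, par in parent.items():
        children.setdefault(par, []).append(child)
    order, queue = [], deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        queue.extend(children.get(v, []))
    return order

order = bfs_order(wilson_ust(4, 4))
# Every patch position appears exactly once, with the root first.
assert order[0] == (0, 0)
assert sorted(order) == [(r, c) for r in range(4) for c in range(4)]
```

The rejection-sampling step described in the abstract would then amount to resampling trees until the BFS prefix covers a desired connected region of patches.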




Appendix: Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning

Neural Information Processing Systems

Since we apply the row-wise softmax in Eq. (7), each row of the soft permutation matrix sums to one. Each self-attention layer was followed by a point-wise fully connected network with two layers (1024 hidden dimensions) and a residual connection. We set the graph embedding dimension to 64. We tried different weightings of the reconstruction loss and the permutation-matrix penalty to maximize reconstruction accuracy with a discretized permutation matrix while keeping training stable. In Section 4.1 we describe how distances in the graph embedding space relate to the graph edit distance (GED); one important property of the GED is its invariance to the node ordering of the graphs being compared. As discussed in Section 2.2 (Key architectural properties), the architecture is carefully designed to respect this invariance, which is exactly what we would expect.
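The interplay between a row-wise softmax and a discretized permutation matrix mentioned in this excerpt can be illustrated with a minimal sketch. This is not the paper's implementation: the score matrix is a toy example, and `discretize` uses a simple greedy assignment (the Hungarian algorithm would give the optimal one).

```python
import math

def row_softmax(scores):
    """Row-wise softmax: each row becomes a distribution over target
    positions, so every row of the soft permutation matrix sums to one."""
    out = []
    for row in scores:
        m = max(row)                      # subtract max for stability
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

def discretize(soft):
    """Greedy hard assignment: repeatedly take the largest remaining entry
    whose row and column are still free. Returns perm with perm[i] = j,
    i.e. row i is matched to column j."""
    n = len(soft)
    perm = [None] * n
    used_rows, used_cols = set(), set()
    entries = sorted(((soft[i][j], i, j) for i in range(n) for j in range(n)),
                     reverse=True)
    for _, i, j in entries:
        if i not in used_rows and j not in used_cols:
            perm[i] = j
            used_rows.add(i)
            used_cols.add(j)
    return perm

scores = [[9.0, 0.1, 0.2],
          [0.3, 0.2, 8.0],
          [0.1, 7.0, 0.4]]
soft = row_softmax(scores)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in soft)  # rows normalized
assert discretize(soft) == [0, 2, 1]                    # hard permutation
```

The weighting trade-off the excerpt describes is between reconstruction accuracy under this hard `discretize` step and a penalty keeping the soft matrix close to a true permutation during training.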


A Distance Measure for Random Permutation Set: From the Layer-2 Belief Structure Perspective

Cheng, Ruolan, Deng, Yong, Moral, Serafín, Trillo, José Ramón

arXiv.org Artificial Intelligence

Random permutation set (RPS) is a recently proposed framework designed to represent order-structured uncertain information. Measuring the distance between permutation mass functions is a key research topic in RPS theory (RPST). This paper conducts an in-depth analysis of distances between RPSs from two different perspectives: random finite set (RFS) and transferable belief model (TBM). Adopting the layer-2 belief structure interpretation of RPS, we regard RPST as a refinement of TBM, where the order in the ordered focus set represents qualitative propensity. Starting from the permutation, we introduce a new definition of the cumulative Jaccard index to quantify the similarity between two permutations and further propose a distance measure method for RPSs based on the cumulative Jaccard index matrix. The metric and structural properties of the proposed distance measure are investigated, including the positive definiteness analysis of the cumulative Jaccard index matrix, and a correction scheme is provided. The proposed method has a natural top-weightiness property: inconsistencies between higher-ranked elements tend to result in greater distance values. Two parameters are provided to the decision-maker to adjust the weight and truncation depth. Several numerical examples are used to compare the proposed method with the existing method. The experimental results show that the proposed method not only overcomes the shortcomings of the existing method and is compatible with the Jousselme distance, but also has higher sensitivity and flexibility.
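One plausible reading of the cumulative Jaccard index described above is a prefix-wise Jaccard similarity: compare the top-k elements of the two permutations for each k. The sketch below follows that reading; it is an assumption about the paper's definition, and it omits the weighting and truncation-depth parameters as well as the positive-definiteness correction.

```python
def cumulative_jaccard(p, q):
    """Prefix-wise Jaccard similarities between two permutations p and q.

    J_k = |top-k(p) & top-k(q)| / |top-k(p) | top-k(q)| for k = 1..n.
    Because higher-ranked elements enter every later prefix, disagreements
    near the top depress more terms: one way to get the paper's
    'top-weightiness' property.
    """
    n = len(p)
    a, b, sims = set(), set(), []
    for k in range(n):
        a.add(p[k])
        b.add(q[k])
        sims.append(len(a & b) / len(a | b))
    return sims

# Swapping the last two elements only perturbs the length-3 prefix:
assert cumulative_jaccard("abcd", "abdc") == [1.0, 1.0, 0.5, 1.0]
# Swapping the FIRST two elements perturbs the very first prefix instead:
assert cumulative_jaccard("abcd", "bacd") == [0.0, 1.0, 1.0, 1.0]
```

A distance between two random permutation sets would then aggregate these per-pair similarities over the ordered focal sets, which is where the paper's cumulative Jaccard index matrix comes in.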






Multifractal hopscotch in "Hopscotch" by Julio Cortazar

Dec, Jakub, Dolina, Michał, Drożdż, Stanisław, Kwapień, Jarosław, Stanisz, Tomasz

arXiv.org Artificial Intelligence

Punctuation is the main factor introducing correlations in natural language written texts, and it crucially impacts their overall effectiveness, expressiveness, and readability. Punctuation marks at the end of sentences are of particular importance, as their distribution can determine various complexity features of written natural language. Here, the sentence length variability (SLV) time series representing "Hopscotch" by Julio Cortazar are subjected to quantitative analysis with an attempt to identify their distribution type, long-memory effects, and potential multiscale patterns. The analyzed novel is an important and innovative piece of literature whose essential property is the freedom of movement between its building blocks that the author gives the reader. The statistical consequences of this freedom are closely investigated in the original, Spanish version of the novel as well as in its translations into English and Polish. Clear evidence of rich multifractality in the SLV dynamics, with a left-sided asymmetry, is nevertheless observed in all three language versions as well as in versions with differently ordered chapters.
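The SLV time series this abstract analyzes can be constructed very simply: split the text at end-of-sentence punctuation and count words per sentence. The sketch below shows only that construction step; the multifractal analysis itself (e.g. multifractal detrended fluctuation analysis) is beyond a few lines and is not reproduced here.

```python
import re

def sentence_lengths(text):
    """Sentence-length-variability (SLV) series: the number of words in
    each consecutive sentence, with sentences delimited by end-of-sentence
    punctuation (. ! ?). A simplification: abbreviations and ellipses are
    not handled."""
    sentences = re.split(r"[.!?]+", text)
    return [len(s.split()) for s in sentences if s.strip()]

sample = ("Short one. A somewhat longer sentence follows here! And? "
          "Then a final sentence ends the sample.")
assert sentence_lengths(sample) == [2, 6, 1, 7]
```

The resulting integer series, one value per sentence, is what gets tested for distribution type, long memory, and multifractal scaling across the three language versions.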


Functional relevance based on the continuous Shapley value

Delicado, Pedro, Pachón-García, Cristian

arXiv.org Machine Learning

The presence of Artificial Intelligence (AI) in our society is increasing, which brings with it the need to understand the behaviour of AI mechanisms, including machine learning predictive algorithms fed with tabular data, text, or images, among other types of data. This work focuses on the interpretability of predictive models based on functional data. Designing interpretability methods for functional data models implies working with a set of features whose size is infinite. In the context of scalar-on-function regression, we propose an interpretability method based on the Shapley value for continuous games, a mathematical formulation that allows a global payoff to be distributed fairly among a continuous set of players. The method is illustrated through a set of experiments with simulated and real data sets. The open-source Python package ShapleyFDA is also presented.
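The continuous Shapley value can be approximated in the familiar discrete way: partition the function's domain into interval "players" and estimate each player's Shapley value by Monte Carlo over random orderings. This is a sketch of that standard approximation, not the paper's method or the ShapleyFDA API; the function name and the toy additive payoff are illustrative.

```python
import random

def shapley_intervals(payoff, n_segments, n_samples=2000, seed=0):
    """Monte Carlo Shapley values after discretizing the domain into
    n_segments interval players. payoff maps a set of segment indices to
    a real number (e.g. the model's prediction using only those segments).
    """
    rng = random.Random(seed)
    phi = [0.0] * n_segments
    for _ in range(n_samples):
        order = list(range(n_segments))
        rng.shuffle(order)
        coalition = set()
        before = payoff(coalition)          # empty coalition
        for player in order:
            coalition.add(player)
            after = payoff(coalition)
            # marginal contribution of this player, averaged over orders
            phi[player] += (after - before) / n_samples
            before = after
    return phi

# Toy additive game: each segment contributes a fixed weight on its own.
weights = [1.0, 2.0, 3.0]
phi = shapley_intervals(lambda s: sum(weights[i] for i in s), 3)
# For additive games, the Shapley value equals each player's own weight.
assert all(abs(phi[i] - weights[i]) < 1e-6 for i in range(3))
```

Refining the partition (larger `n_segments`) moves this discrete approximation toward the continuous-game formulation the paper works with directly.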